
Agency, Affordances, and Enculturation of Augmentation Technologies

Duin, Ann Hill, Pedersen, Isabel

arXiv.org Artificial Intelligence

Augmentation technologies are undergoing a process of enculturation due to many factors, one being the rise of artificial intelligence (AI), or what the World Intellectual Property Organization (WIPO) terms the AI wave or AI boom. Chapter 3 focuses critical attention on the hyped assumption that sophisticated, emergent, and embodied augmentation technologies will improve lives, literacy, cultures, arts, economies, and social contexts. The chapter begins by discussing the problem of ambiguity in AI terminology, which it addresses with a description of the WIPO Categorization of AI Technologies Scheme. It then draws on media and communication studies to explore concepts such as agents, agency, power, and agentive relationships between humans and robots. The chapter treats the development of non-human agents in industry as a critical factor in the rise of augmentation technologies and examines how marketing communication enculturates future users to adopt and adapt to the technology. Scholars are charting the significant ways that people are drawn further into commercial digital landscapes, such as the Metaverse concept, in post-internet society. The chapter concludes by examining recent claims concerning the Metaverse and augmented reality.


Context-Awareness and Interpretability of Rare Occurrences for Discovery and Formalization of Critical Failure Modes

Polavaram, Sridevi, Zhou, Xin, Ravi, Meenu, Zarei, Mohammad, Srivastava, Anmol

arXiv.org Artificial Intelligence

Vision systems are increasingly deployed in critical domains such as surveillance, law enforcement, and transportation. However, their vulnerability to rare or unforeseen scenarios poses significant safety risks. To address these challenges, we introduce Context-Awareness and Interpretability of Rare Occurrences (CAIRO), an ontology-based, human-assistive discovery framework for the detection and formalization of failure cases (or CP, Critical Phenomena). By design, CAIRO incentivizes human-in-the-loop testing and evaluation of criticality arising from misdetections, adversarial attacks, and hallucinations in black-box AI models. Our analysis of object detection model failures in automated driving systems (ADS) showcases scalable and interpretable ways of formalizing the observed gaps between camera perception and real-world contexts, resulting in test cases stored as explicit knowledge graphs (in OWL/XML format) amenable to sharing, downstream analysis, logical reasoning, and accountability.

INTRODUCTION: Formal verification techniques are the norm in chip design, but they remain elusive in computer vision (CV) applications, because CV applications are open-ended, often trained on millions of data points and billions of parameters to learn a few hundred labels. Fine-tuning practices are commonly used to tailor them to specific needs, but with no standard testing procedures in place to guide their application and ensure fail-safe behavior, critical systems like Autonomous Vehicles (AV) are bound to fail [1].
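To make the idea of storing a failure case as a machine-readable knowledge graph concrete, the sketch below builds a minimal OWL/XML-style fragment with Python's standard library. All IRIs and property names (`CriticalPhenomenon`, `weather`, `case_0001`) are hypothetical illustrations; the abstract does not reproduce CAIRO's actual ontology vocabulary.

```python
import xml.etree.ElementTree as ET

OWL = "http://www.w3.org/2002/07/owl#"
ET.register_namespace("", OWL)

# Root ontology element (ontology IRI is a placeholder).
onto = ET.Element(f"{{{OWL}}}Ontology", {"ontologyIRI": "http://example.org/cairo"})

# Declare one individual: a single recorded failure case.
decl = ET.SubElement(onto, f"{{{OWL}}}Declaration")
ET.SubElement(decl, f"{{{OWL}}}NamedIndividual", {"IRI": "#case_0001"})

# Assert its class: the case is a Critical Phenomenon (a misdetection, say).
asrt = ET.SubElement(onto, f"{{{OWL}}}ClassAssertion")
ET.SubElement(asrt, f"{{{OWL}}}Class", {"IRI": "#CriticalPhenomenon"})
ET.SubElement(asrt, f"{{{OWL}}}NamedIndividual", {"IRI": "#case_0001"})

# Attach contextual metadata as a data property, e.g. weather at capture time.
prop = ET.SubElement(onto, f"{{{OWL}}}DataPropertyAssertion")
ET.SubElement(prop, f"{{{OWL}}}DataProperty", {"IRI": "#weather"})
ET.SubElement(prop, f"{{{OWL}}}NamedIndividual", {"IRI": "#case_0001"})
lit = ET.SubElement(prop, f"{{{OWL}}}Literal")
lit.text = "heavy_rain"

owl_xml = ET.tostring(onto, encoding="unicode")
```

Because the result is plain XML conforming to OWL's serialization style, it can be shared and loaded by ontology tooling for the downstream reasoning and accountability uses the abstract describes.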


Using Reinforcement Learning to Integrate Subjective Wellbeing into Climate Adaptation Decision Making

Vandervoort, Arthur, Costa, Miguel, Petersen, Morten W., Drews, Martin, Haustein, Sonja, Morrissey, Karyn, Pereira, Francisco C.

arXiv.org Artificial Intelligence

Subjective wellbeing is a fundamental aspect of human life, influencing life expectancy and economic productivity, among other outcomes. Mobility plays a critical role in maintaining wellbeing, yet the increasing frequency and intensity of both nuisance and high-impact floods due to climate change are expected to significantly disrupt access to activities and destinations, thereby affecting overall wellbeing. Climate adaptation presents a complex challenge for policymakers, who must select and implement policies from a broad set of options with varying effects while managing resource constraints and uncertain climate projections. In this work, we propose a multi-modular framework that uses reinforcement learning as a decision-support tool for climate adaptation in Copenhagen, Denmark. Our framework integrates four interconnected components: long-term rainfall projections, flood modeling, transport accessibility, and wellbeing modeling. This approach enables decision-makers to identify spatial and temporal policy interventions that help sustain or enhance subjective wellbeing over time. By modeling climate adaptation as an open-ended system, our framework provides a structured approach for exploring and evaluating adaptation policy pathways. In doing so, it supports policymakers in making informed decisions that maximize wellbeing in the long run.
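The general shape of such an RL decision-support loop can be sketched as tabular Q-learning over a toy adaptation problem. Everything here is a stand-in: the number of districts, the flood probability, the wellbeing proxy, and the cost of an intervention are invented for illustration, not taken from the paper's Copenhagen model.

```python
import random

DISTRICTS = 3                     # hypothetical number of city districts
HORIZON = 10                      # planning steps (e.g. years)
ACTIONS = range(DISTRICTS + 1)    # protect one district, or 0 = do nothing

def step(protected, action, rng):
    """One planning step: optionally protect a district, then observe flooding."""
    protected = set(protected)
    if action > 0:
        protected.add(action - 1)
    # Wellbeing proxy: an unprotected district loses accessibility when flooded.
    flooded = {d for d in range(DISTRICTS) if rng.random() < 0.4}
    wellbeing = sum(1.0 for d in range(DISTRICTS)
                    if d in protected or d not in flooded)
    cost = 0.5 if action > 0 else 0.0       # interventions consume budget
    return frozenset(protected), wellbeing - cost

def q_learning(episodes=2000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Learn which districts to protect, and when, to maximize long-run wellbeing."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state = frozenset()                 # no districts protected yet
        for _ in range(HORIZON):
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.choice(list(ACTIONS))
            else:
                a = max(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
            nxt, r = step(state, a, rng)
            best_next = max(Q.get((nxt, x), 0.0) for x in ACTIONS)
            Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((state, a), 0.0))
            state = nxt
    return Q

Q = q_learning()
# Recommended first intervention from the initial (unprotected) state.
first_action = max(ACTIONS, key=lambda a: Q.get((frozenset(), a), 0.0))
```

The paper's framework replaces each toy component with a real model (rainfall projections drive flooding, flooding degrades transport accessibility, accessibility feeds the wellbeing model), but the decision-support structure, a learned mapping from state to spatially and temporally targeted interventions, is the same.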


Impact of Argument Type and Concerns in Argumentation with a Chatbot

Chalaguine, Lisa A., Hunter, Anthony, Hamilton, Fiona L., Potts, Henry W. W.

arXiv.org Artificial Intelligence

Conversational agents, also known as chatbots, are versatile tools with the potential to be used in dialogical argumentation. They could be deployed in tasks such as persuasion for behaviour change (e.g. persuading people to eat more fruit, to take regular exercise, etc.). However, to achieve this, there is a need to develop methods for acquiring appropriate arguments and counterarguments that reflect both sides of the discussion. For instance, to persuade someone to exercise regularly, the chatbot needs to know the counterarguments that the user might have for not exercising. To address this need, we present methods for acquiring arguments and counterarguments and, importantly, meta-level information that can be useful for deciding when arguments can be used during an argumentation dialogue. We evaluate these methods in studies with participants and show how harnessing them in a chatbot can make it more persuasive.
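A minimal sketch of how meta-level information (here, a concern label per user counterargument) can drive reply selection is shown below. The three sample arguments, concern labels, and replies are invented for illustration; the paper's crowdsourced argument collections and concern taxonomy are not reproduced here.

```python
# Map each anticipated user counterargument to (concern label, chatbot reply).
# The concern label is the meta-level information used to pick a relevant reply.
ARGUMENTS = {
    "I have no time to exercise": (
        "time", "Even a 10-minute walk counts; short sessions add up."),
    "Gyms are too expensive": (
        "cost", "Bodyweight exercise at home is free and effective."),
    "Exercise is boring": (
        "enjoyment", "Team sports or podcasts can make exercise enjoyable."),
}

def select_reply(user_argument):
    """Return (concern, reply) for a known user counterargument, else None."""
    return ARGUMENTS.get(user_argument)

concern, reply = select_reply("I have no time to exercise")
```

In a real dialogue system the exact-match lookup would be replaced by a classifier mapping free-text user input to the nearest known counterargument, but the principle, addressing the concern behind the user's argument rather than the literal wording, is what the studies evaluate.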